Russ Allbery: Review: A Study in Honor
Series: Janet Watson Chronicles #1
Publisher: Harper Voyager
Copyright: July 2018
ISBN: 0-06-269932-6
Format: Kindle
Pages: 295
Indeed, while we have proven that there is a strong and significant correlation between income and participation in a free/libre software project, it is not possible for us to pronounce ourselves on the causality of this link.

In the French original text:

En effet, si nous avons prouvé qu'il existe une corrélation forte et significative entre le salaire et la participation à un projet libre, il ne nous est pas possible de nous prononcer sur la causalité de ce lien.

Said differently, it is certain that there is a relationship between income and F/LOSS contribution, but it's unclear whether working on free/libre software ultimately helps in finding a well-paid job, or if having a well-paid job is what enables work on free/libre software. I would like to scratch at this question a bit further, mostly relying on my own observations, experiences, and discussions with F/LOSS contributors.
It is unclear whether working on free/libre software ultimately helps in finding a well-paid job, or if having a well-paid job is what enables work on free/libre software.

Maybe we need to imagine this cause-effect relationship over time: as a student, without children, with lots of free time and hopefully some money from the state or the family, people can spend time on F/LOSS, collect experience, earn recognition, and later find a well-paid job and turn unpaid F/LOSS contributions into a hobby, cementing their status in the community while at the same time generating a sense of well-being from working on the common good. This is a quite common scenario. As the Flosspols study revealed, however, boys often get their own computer at the age of 14, while girls get one only at the age of 20. (These numbers might be slightly different now, and possibly many people don't own an actual laptop or desktop computer anymore; instead they own mobile devices, which are not exactly inciting them to look behind the surface, take technology apart, learn it, and appropriate it.) In any case, the above scenario does not allow people who join F/LOSS later in life, e.g. when changing careers, to find their place. I believe that F/LOSS projects cannot expect to have more women, people of color, people from working-class backgrounds, or people from outside of Germany, France, the USA, the UK, Australia, and Canada on board as long as volunteer work is the status quo and waged labour an earned privilege.
1) Reproduce the issue on an up-to-date kernel: check whether it still happens on the latest code, e.g. on a branch like drm-fixes-<date>.
2) Examine the issue tracker: confirm that your issue isn't already documented and addressed in the AMD display driver issue tracker. If you find a similar issue, you can team up with others and speed up the debugging process.
3) Check whether the display driver was fully loaded: look in your kernel log for a message like [drm] Display Core v.... If this message doesn't appear, the display driver wasn't fully loaded and it's not likely a display driver issue; you will see a notification that something went wrong elsewhere. Two examples:

[drm] Display Core v3.2.241 initialized on DCN 2.1
[drm] Display Core v3.2.237 initialized on DCN 3.0.1

The DCN version also tells you where the code for your hardware lives; DCN 3.0.1, for example, maps to drivers/gpu/drm/amd/display/dc/dcn301. We all know that AMD's shared code is huge, and you can use these boundaries to rule out code unrelated to your issue.
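A quick way to look for that line (standard tools, nothing driver-specific):

$ sudo dmesg | grep -i "display core"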
7) Newer families may inherit code from older ones: you can find dcn301 using code from dcn30, dcn20 and dcn10 files. It's crucial to verify which hooks and helpers your driver utilizes, to investigate the right portion of the code. You can leverage ftrace for supplemental validation. To give an example, it was useful when I was updating DCN3 color mapping to correctly use its new post-blending color capabilities.
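As an illustration, a minimal ftrace session of the kind that can confirm which family's hooks actually run (the filter pattern here is a guess; adjust it to the functions you care about):

$ sudo bash -c "echo 'dcn30_*' > /sys/kernel/tracing/set_ftrace_filter"
$ sudo bash -c "echo function > /sys/kernel/tracing/current_tracer"
$ sudo cat /sys/kernel/tracing/trace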
Additionally, you can use two different HW families to compare behaviours. If you see the issue in one but not in the other, you can compare the code and understand what has changed, and whether the implementation from a previous family doesn't fit the new HW resources or design. You can also count on the help of the community on the Linux AMD issue tracker to validate your code on other hardware and/or systems.
This approach helped me debug a 2-year-old issue where the cursor gamma adjustment was incorrect on DCN3 hardware but worked correctly on the DCN2 family. I solved the issue in two steps, thanks to community feedback and validation.

Hardware capabilities are hardcoded per family in the drivers/gpu/drm/amd/display/dc/dcn*/dcn*_resource.c file, more precisely in the dcn*_resource_construct() function.
Using DCN301 for illustration, here is the list of its hardware caps:
/*************************************************
* Resource + asic cap hardcoding *
*************************************************/
pool->base.underlay_pipe_index = NO_UNDERLAY_PIPE;
pool->base.pipe_count = pool->base.res_cap->num_timing_generator;
pool->base.mpcc_count = pool->base.res_cap->num_timing_generator;
dc->caps.max_downscale_ratio = 600;
dc->caps.i2c_speed_in_khz = 100;
dc->caps.i2c_speed_in_khz_hdcp = 5; /*1.4 w/a enabled by default*/
dc->caps.max_cursor_size = 256;
dc->caps.min_horizontal_blanking_period = 80;
dc->caps.dmdata_alloc_size = 2048;
dc->caps.max_slave_planes = 2;
dc->caps.max_slave_yuv_planes = 2;
dc->caps.max_slave_rgb_planes = 2;
dc->caps.is_apu = true;
dc->caps.post_blend_color_processing = true;
dc->caps.force_dp_tps4_for_cp2520 = true;
dc->caps.extended_aux_timeout_support = true;
dc->caps.dmcub_support = true;
/* Color pipeline capabilities */
dc->caps.color.dpp.dcn_arch = 1;
dc->caps.color.dpp.input_lut_shared = 0;
dc->caps.color.dpp.icsc = 1;
dc->caps.color.dpp.dgam_ram = 0; // must use gamma_corr
dc->caps.color.dpp.dgam_rom_caps.srgb = 1;
dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1;
dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1;
dc->caps.color.dpp.dgam_rom_caps.pq = 1;
dc->caps.color.dpp.dgam_rom_caps.hlg = 1;
dc->caps.color.dpp.post_csc = 1;
dc->caps.color.dpp.gamma_corr = 1;
dc->caps.color.dpp.dgam_rom_for_yuv = 0;
dc->caps.color.dpp.hw_3d_lut = 1;
dc->caps.color.dpp.ogam_ram = 1;
// no OGAM ROM on DCN301
dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.dpp.ogam_rom_caps.pq = 0;
dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.mpc.ogam_rom_caps.pq = 0;
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1;
dc->caps.dp_hdmi21_pcon_support = true;
/* read VBIOS LTTPR caps */
if (ctx->dc_bios->funcs->get_lttpr_caps) {
	enum bp_result bp_query_result;
	uint8_t is_vbios_lttpr_enable = 0;

	bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable);
	dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
}

if (ctx->dc_bios->funcs->get_lttpr_interop) {
	enum bp_result bp_query_result;
	uint8_t is_vbios_interop_enabled = 0;

	bp_query_result = ctx->dc_bios->funcs->get_lttpr_interop(ctx->dc_bios, &is_vbios_interop_enabled);
	dc->caps.vbios_lttpr_aware = (bp_query_result == BP_RESULT_OK) && !!is_vbios_interop_enabled;
}
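Elsewhere in the driver, code paths are typically gated on these flags. A simplified, hypothetical sketch of the pattern (the capability field is from the list above; the branches are illustrative):

if (dc->caps.color.dpp.hw_3d_lut) {
	/* this family has a 3D LUT in the DPP block: program it */
} else {
	/* fall back, or reject the requested color transformation */
}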
Use git log and git blame to identify commits targeting the code section you're interested in.
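For example, using the DCN 3.0.1 paths from above:

$ git log --oneline -- drivers/gpu/drm/amd/display/dc/dcn301/
$ git blame drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c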
10) Track regressions: if you're examining the amd-staging-drm-next branch, check for regressions between DC release versions. These are defined by DC_VER in the drivers/gpu/drm/amd/display/dc/dc.h file. Alternatively, find a commit with the format drm/amd/display: 3.2.221 that marks a display release; this is useful for bisecting. This information helps you understand how outdated your branch is and identify potential regressions. You can assume each DC_VER bump takes around one week.
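For example, you can list the display release commits for bisection with standard git (the pattern is illustrative):

$ git log --oneline --grep='drm/amd/display: 3.2.'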
Finally, check the testing log of each release in the report provided on the amd-gfx mailing list, such as this one: Tested-by: Daniel Wheeler.

For stable measurements while debugging, you can also pin the GPU to a fixed performance level via sysfs, which rules out clock and power-management effects:

sudo bash -c "echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level"
/* Surface update type is used by dc_update_surfaces_and_stream
 * The update type is determined at the very beginning of the function based
 * on parameters passed in and decides how much programming (or updating) is
 * going to be done during the call.
 *
 * UPDATE_TYPE_FAST is used for really fast updates that do not require much
 * logical calculations or hardware register programming. This update MUST be
 * ISR safe on windows. Currently fast update will only be used to flip surface
 * address.
 *
 * UPDATE_TYPE_MED is used for slower updates which require significant hw
 * re-programming however do not affect bandwidth consumption or clock
 * requirements. At present, this is the level at which front end updates
 * that do not require us to run bw_calcs happen. These are in/out transfer func
 * updates, viewport offset changes, recout size changes and pixel depth changes.
 * This update can be done at ISR, but we want to minimize how often this happens.
 *
 * UPDATE_TYPE_FULL is slow. Really slow. This requires us to recalculate our
 * bandwidth and clocks, possibly rearrange some pipes and reprogram anything front
 * end related. Any time viewport dimensions, recout dimensions, scaling ratios or
 * gamma need to be adjusted or pipe needs to be turned on (or disconnected) we do
 * a full update. This cannot be done at ISR level and should be a rare event.
 * Unless someone is stress testing mpo enter/exit, playing with colour or adjusting
 * underscan we don't expect to see this call at all.
 */
enum surface_update_type {
	UPDATE_TYPE_FAST, /* super fast, safe to execute in isr */
	UPDATE_TYPE_MED, /* ISR safe, most of programming needed, no bw/clk change */
	UPDATE_TYPE_FULL, /* may need to shuffle resources */
};
For amd64, arm64, armhf, i386, ppc64el, riscv64 and s390 on Debian trixie, unstable and experimental, this is only around 500 GB, i.e. less than 1%. Although the new service is not yet ready for use, it has already provided a promising outlook in this regard. More information is available on https://rebuilder-snapshot.debian.net and we hope that this service becomes usable in the coming weeks.
The adjacent picture shows a sticky note authored by Jan-Benedict Glaw at the summit in Hamburg, confirming Holger Levsen's theory that rebuilding all Debian packages needs only a very small subset of packages. The text states that 69,200 packages (in Debian sid) list 24,850 packages in their .buildinfo files, in 8,0200 variations. This little piece of paper was the beginning of rebuilder-snapshot and is a direct outcome of the summit!
The Reproducible Builds team would like to thank our event sponsors who include Mullvad VPN, openSUSE, Debian, Software Freedom Conservancy, Allotropia and Aspiration Tech.
[ ] introduce the concepts of Reproducible Builds, including best practices for developing and releasing software, the tools available to help diagnose issues, and touch on progress towards solving decades-old, deeply pervasive, fundamental security issues. Learn how to verify and demonstrate trust, rather than simply hoping everything is OK!

Germane to the contents of the talk, the slides for Vagrant's talk can be built reproducibly, resulting in a PDF with a SHA1 of cfde2f8a0b7e6ec9b85377eeac0661d728b70f34 when built on Debian bookworm, and c21fab273232c550ce822c4b0d9988e6c49aa2c3 on Debian sid at the time of writing.
[ ] today I hold in my hands the first two bit-identical LibreOffice rpm packages. And this is the success I wanted to share with you all today [and] it makes me feel as if we can solve anything.
[ ] made esp32c3 microcontroller firmware reproducible with Rust, repro-env and Arch Linux:

I chose the esp32c3 [board] because it has good Rust support from the esp-rs project, and you can get a dev board for about 6-8 €. To document my build environment I used repro-env together with Arch Linux, because its archive is very reliable and contains all the different Rust development tools I needed.
[ ] the dump command, and hopes that someone may be able to help.
For the amd64, arm64, i386 and armhf architectures, data from the Reproducible Builds testing framework is collected by this migration software, even though, at the time of writing, it neither causes migration bonuses nor blocks migration. Indeed, the results are only visible on Britney's excuses pages as well as on individual packages' pages on tracker.debian.org.
.buildinfo files

Back in 2017, Steve Langasek filed a bug against Ubuntu's Launchpad code hosting platform to report that .changes files (artifacts of building Ubuntu and Debian packages) reference .buildinfo files that aren't actually exposed by Launchpad itself. This was causing issues when attempting to process .changes files with tools such as Lintian. However, it was noticed last month that, in early August of this year, Simon Quigley had resolved this issue, and .buildinfo files are now available from the Launchpad system.
composer.lock file, ensuring total reproducibility of the shipped binary file. Further details and the discussion that went into their particular implementation can be found on the associated GitHub pull request.

In addition, the presentation Leveraging Nix in the PHP ecosystem was given in late October at the PHP International Conference in Munich by Pol Dellaiera. While the video replay is not yet available, the (reproducible) presentation slides and speaker notes are available.
- 7z. [ ]
- RequiredToolNotFound import. [ ]
- 252 to Debian unstable. [ ]
- SOURCE_DATE_EPOCH and CMake [ ], added iomart (née Bytemark) and DigitalOcean to our sponsors page [ ], and dropped an unnecessary link on some horizontal navigation buttons [ ].
- amber-cli (date-related issue)
- bin86 (FTBFS-2038)
- buildah (timestamp)
- colord (CPU)
- google-noto-fonts (file modification issue)
- grub2 (directory-related metadata)
- guile-fibers (parallelism issue)
- guile-newt (parallelism issue)
- gutenprint (embedded date/hostname)
- hub (random build path)
- ipxe (nondeterministic behaviour)
- joker / joker
- kopete (undefined behaviour)
- kraft (embedded hostname)
- libcamera (signature)
- libguestfs (embeds build host file)
- llvm (toolchain/Rust-related issue)
- nfdump (date-related issue)
- ovmf (unknown cause)
- quazip (missing fonts)
- rdflib (nondeterministic behaviour)
- rpm (toolchain)
- tigervnc (embedded an RSA signature)
- whatsie (date-related issue)
- xen (time-related issue)
- policycoreutils (sort-related issue)
- python-ansible-pygments
- bidict
- meson
- radsecproxy
- taffybar
- php-doc
- pelican
- maildir-utils
- openmrac-data
- vectorscan

- Priority: important in a new package set. [ ][ ]
- pool_buildinfos script to be re-run for a specific year. [ ]
- osuosl4 node [ ][ ] along with lynxis [ ].
- amd64 Ionos builders from 48 GiB to 64 GiB; thanks IONOS! [ ]
- arm64 architecture workers from 24 to 16 in order to improve stability [ ], reduce the workers for amd64 from 32 to 28 and, for i386, reduce from 12 down to 8 [ ].
- cache_dir size setting to 16 GiB. [ ]
- systemd-oomd, as it unfortunately kills sshd. [ ]
- debootstrap from backports when commissioning nodes. [ ]
- live_build_debian_stretch_gnome, debsums-tests_buster and debsums-tests_buster jobs to the zombie list. [ ][ ]
- jekyll build with the --watch argument when building the Reproducible Builds website. [ ]
- rc.local's Bash syntax so it can actually run [ ], commenting away some file cleanup code that is (potentially) deleting too much [ ] and fixing the html_brekages page for Debian package builds [ ]. Finally, diagnosed and submitted a patch to add an AddEncoding gzip .gz line to the tests.reproducible-builds.org Apache configuration so that Gzip files aren't re-compressed as Gzip, which some clients can't deal with (as well as being a waste of time). [ ]
You can reach us via #reproducible-builds on irc.oftc.net, or via our mailing list, rb-general@lists.reproducible-builds.org.
lsos() from a StackOverflow question from 2009 (!!), the overbought/oversold price band plotter from an older blog post, the market monitor blogged about, as well as the checkCRANStatus() function tweeted about by Tim Taylor. And more, so take a look.
This release brings a number of updates, including a rather nice
improvement to the market monitor
making updates buttery smooth and not flickering (with big
thanks to Paul Murrell who calmly pointed out once again that
base R does of course have the functionality I was seeking) as well as
three new functions (!!) and then a little maintenance on the
-Wformat
print format string issue that kept everybody
busy this week.
The NEWS entry follows.
Courtesy of my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Changes in version 0.0.16 (2023-12-02)
- Added new function str.language() based on post by Bill Dunlap
- Added new argument sleep in intradayMarketMonitor
- Switched to dev.hold() and dev.flush() in intradayMarketMonitor, with thanks to Paul Murrell
- Updated continuous integration setup, twice, and package badges
- Added new function shadowedPackages
- Added new function limitDataTableCores
- Updated two error() calls to the updated tidyCpp signature to not tickle -Wformat warnings under R-devel
- Updated two URLs to please link checks in R-devel
- Switched two tests for class of variable to is.* and inherits(), respectively
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
This release accommodates the -Wformat and -Wformat-security warnings from the development branch of R. It also includes a new example snippet illustrating creation of a numeric matrix.
The NEWS entry follows.
Thanks to my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Changes in tidyCpp version 0.0.7 (2023-11-30)
- Add an example for a numeric matrix creator
- Update the continuous integration setup
- Accommodate print format warnings from r-devel
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
$ debputy plugin list automatic-discard-rules
+-----------------------+-------------+
| Name                  | Provided By |
+-----------------------+-------------+
| python-cache-files    | debputy     |
| la-files              | debputy     |
| backup-files          | debputy     |
| version-control-paths | debputy     |
| gnu-info-dir-file     | debputy     |
| debian-dir            | debputy     |
| doxygen-cruft-files   | debputy     |
+-----------------------+-------------+
$ debputy plugin show automatic-discard-rules la-files
Automatic Discard Rule: la-files
================================
Documentation: Discards any .la files beneath /usr/lib
Example
-------
/usr/lib/libfoo.la << Discarded (directly by the rule)
/usr/lib/libfoo.so.1.0.0
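In the plugin code, the rule, its reference documentation and the example above are registered via debputy's plugin API; the _debputy_prune_la_files callback (not shown here) implements the actual discard: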
api.automatic_discard_rule(
"la-files",
_debputy_prune_la_files,
rule_reference_documentation="Discards any .la files beneath /usr/lib",
examples=automatic_discard_rule_example(
"usr/lib/libfoo.la",
("usr/lib/libfoo.so.1.0.0", False),
),
)
# Output if the code or example is broken
$ debputy plugin show automatic-discard-rules la-files
[...]
Automatic Discard Rule: la-files
================================
Documentation: Discards any .la files beneath /usr/lib
Example
-------
/usr/lib/libfoo.la !! INCONSISTENT (code: keep, example: discard)
/usr/lib/libfoo.so.1.0.0
debputy: warning: The example was inconsistent. Please file a bug against the plugin debputy
$ debputy plugin list plugable-manifest-rules
+-------------------------------+------------------------------+-------------+
| Rule Name                     | Rule Type                    | Provided By |
+-------------------------------+------------------------------+-------------+
| install                       | InstallRule                  | debputy     |
| install-docs                  | InstallRule                  | debputy     |
| install-examples              | InstallRule                  | debputy     |
| install-doc                   | InstallRule                  | debputy     |
| install-example               | InstallRule                  | debputy     |
| install-man                   | InstallRule                  | debputy     |
| discard                       | InstallRule                  | debputy     |
| move                          | TransformationRule           | debputy     |
| remove                        | TransformationRule           | debputy     |
| [...]                         | [...]                        | [...]       |
| remove                        | DpkgMaintscriptHelperCommand | debputy     |
| rename                        | DpkgMaintscriptHelperCommand | debputy     |
| cross-compiling               | ManifestCondition            | debputy     |
| can-execute-compiled-binaries | ManifestCondition            | debputy     |
| run-build-time-tests          | ManifestCondition            | debputy     |
| [...]                         | [...]                        | [...]       |
+-------------------------------+------------------------------+-------------+
$ debputy plugin show plugable-manifest-rules install
Generic install ( install )
===========================
The generic install rule can be used to install arbitrary paths into packages
and is *similar* to how dh_install from debhelper works. It has two "primary" uses.
1) The classic "install into directory" similar to the standard dh_install.
2) The "install as" similar to dh-exec's foo => bar feature.
Attributes:
- source (conditional): string
sources (conditional): List of string
A path match ( source ) or a list of path matches ( sources ) defining the
source path(s) to be installed. [...]
- dest-dir (optional): string
A path defining the destination *directory*. [...]
- into (optional): string or a list of string
One or more package names defining which package(s) the paths are installed into. [...]
- as (optional): string
A path defining the path to install the source as. [...]
- when (optional): manifest condition (string or mapping of string)
A condition as defined in [Conditional rules](https://salsa.debian.org/debian/debputy/-/blob/main/MANIFEST-FORMAT.md#Conditional rules).
This rule enforces the following restrictions:
- The rule must use exactly one of: source , sources
- The attribute as cannot be used with any of: dest-dir , sources
[...]
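For a feel of how this reads in practice, here is a minimal, illustrative manifest snippet (the attribute names come from the documentation above; the exact surrounding structure is defined in debputy's MANIFEST-FORMAT.md, and the paths are hypothetical):

installations:
  - install:
      sources:
        - usr/bin/foo
        - usr/bin/foo-helper
      dest-dir: usr/bin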
$ debputy plugin list manifest-variables
+----------------------------------+----------------------------------------+------+-------------+
| Variable (use via: {{NAME}})     | Value                                  | Flag | Provided by |
+----------------------------------+----------------------------------------+------+-------------+
| DEB_HOST_ARCH                    | amd64                                  |      | debputy     |
| [... other DEB_HOST_* vars ...]  | [...]                                  |      | debputy     |
| DEB_HOST_MULTIARCH               | x86_64-linux-gnu                       |      | debputy     |
| DEB_SOURCE                       | debputy                                |      | debputy     |
| DEB_VERSION                      | 0.1.8                                  |      | debputy     |
| DEB_VERSION_EPOCH_UPSTREAM       | 0.1.8                                  |      | debputy     |
| DEB_VERSION_UPSTREAM             | 0.1.8                                  |      | debputy     |
| DEB_VERSION_UPSTREAM_REVISION    | 0.1.8                                  |      | debputy     |
| PACKAGE                          | <package-name>                         |      | debputy     |
| path:BASH_COMPLETION_DIR         | /usr/share/bash-completion/completions |      | debputy     |
+----------------------------------+----------------------------------------+------+-------------+
+-----------------------+--------+-------------------------------------------------------+
| Variable type         | Value  | Option                                                |
+-----------------------+--------+-------------------------------------------------------+
| Token variables       | hidden | --show-token-variables OR --show-all-variables        |
| Special use variables | hidden | --show-special-case-variables OR --show-all-variables |
+-----------------------+--------+-------------------------------------------------------+
$ debputy plugin show manifest-variables path:BASH_COMPLETION_DIR
Variable: path:BASH_COMPLETION_DIR
==================================
Documentation: Directory to install bash completions into
Resolved: /usr/share/bash-completion/completions
Plugin: debputy
The earliest trace of Git in the tree, as far as I can tell aside from the .gitignore file, was bug 774109. It added a script to install the prerequisites to build Firefox on macOS (still called OS X back then), and that would print a message inviting people to obtain a copy of the source code with either Mercurial or Git. That was a precursor to the current bootstrap.py, from September 2012.
Following that, as far as I can tell, the first real incursion of Git in the Firefox source tree tooling happened in bug 965120. A few days earlier, bug 952379 had added a mach clang-format
command that would apply clang-format-diff
to the output from hg diff. Obviously, running hg diff on a Git working tree didn't work, and bug 965120 was filed, and support for Git was added there. That was in January 2014.
on a Git working tree didn't work, and bug 965120 was filed, and support for Git was added there. That was in January 2014.
A year later, when the initial implementation of mach artifact
was added (which ultimately led to artifact builds), Git users were an immediate thought. But while they were considered, it was not to support them, but to avoid actively breaking their workflows. Git support for mach artifact
was eventually added 14 months later, in March 2016.
From gecko-dev to git-cinnabar
Let's step back a little here, back to the end of 2014. My user experience with Mercurial had reached a level of dissatisfaction that was enough for me to decide to take that script from a couple of years prior and make it work for incremental updates. That meant finding a way to store enough information locally to be able to reconstruct whatever the incremental updates would be relying on (guess why other tools hid a local Mercurial clone under the hood). I got something working rather quickly, and after talking to a few people about this side project at the Mozilla Portland All Hands and seeing their excitement, I published a git-remote-hg initial prototype on the last day of the All Hands.
Within weeks, the prototype gained the ability to directly push to Mercurial repositories, and a couple months later, was renamed to git-cinnabar. At that point, as a Git user, instead of cloning the gecko-dev repository from GitHub and switching to a local Mercurial repository whenever you needed to push to a Mercurial repository (i.e. the aforementioned Try server, or, at the time, for reviews), you could just clone and push directly from/to Mercurial, all within Git. And it was fast too. You could get a full clone of mozilla-central in less than half an hour, when at the time, other similar tools would take more than 10 hours (needless to say, it's even worse now).
Another couple months later (we're now at the end of April 2015), git-cinnabar became able to start off a local clone of the gecko-dev repository, rather than clone from scratch, which could be time consuming. But because git-cinnabar and the tool that was updating gecko-dev weren't producing the same commits, this setup was cumbersome and not really recommended. For instance, if you pushed something to mozilla-central with git-cinnabar from a gecko-dev clone, it would come back with a different commit hash in gecko-dev, and you'd have to deal with the divergence.
Eventually, in April 2020, the scripts updating gecko-dev were switched to git-cinnabar, making the use of gecko-dev alongside git-cinnabar a more viable option. Ironically(?), the switch occurred to ease collaboration with KaiOS (you know, the mobile OS born from the ashes of Firefox OS). Well, okay, in all honesty, when the need of syncing in both directions between Git and Mercurial (we only had ever synced from Mercurial to Git) came up, I nudged Mozilla in the direction of git-cinnabar, which, in my (biased but still honest) opinion, was the more reliable option for two-way synchronization (we did have regular conversion problems with hg-git, nothing of the sort has happened since the switch).
One Firefox repository to rule them all
For reasons I don't know, Mozilla decided to use separate Mercurial repositories as "branches". With the switch to the rapid release process in 2011, that meant one repository for nightly (mozilla-central), one for aurora, one for beta, and one for release. And with the addition of Extended Support Releases in 2012, we now add a new ESR repository every year. Boot to Gecko also had its own branches, and so did Fennec (Firefox for Mobile, before Android). There are a lot of them.
And then there are also integration branches, where developer's work lands before being merged in mozilla-central (or backed out if it breaks things), always leaving mozilla-central in a (hopefully) good state. Only one of them remains in use today, though.
I can only suppose that the way Mercurial branches work was not deemed practical. It is worth noting, though, that Mercurial branches are used in some cases, to branch off a dot-release when the next major release process has already started, so it's not a matter of not knowing the feature exists or some such.
In 2016, Gregory Szorc set up a new repository that would contain them all (or at least most of them), which eventually became what is now the mozilla-unified repository. This would e.g. simplify switching between branches when necessary.
7 years later, for some reason, the other "branches" still exist, but most developers are expected to be using mozilla-unified. Mozilla's CI also switched to using mozilla-unified as base repository.
Honestly, I'm not sure why the separate repositories are still the main entry point for pushes, rather than going directly to mozilla-unified, but it probably comes down to switching being work, and not being a top priority. Also, it probably doesn't help that working with multiple heads in Mercurial, even (especially?) with bookmarks, can be a source of confusion. To give an example, if you aren't careful, and do a plain clone of the mozilla-unified repository, you may not end up on the latest mozilla-central changeset, but rather, e.g. one from beta, or some other branch, depending on which one was last updated.
Hosting is simple, right?
Put your repository on a server, install hgweb or gitweb, and that's it? Maybe that works for... Mercurial itself, but that repository "only" has slightly over 50k changesets and less than 4k files. Mozilla-central has more than an order of magnitude more changesets (close to 700k) and two orders of magnitude more files (more than 700k if you count the deleted or moved files, 350k if you count the currently existing ones).
And remember, there are a lot of "duplicates" of this repository. And I didn't even mention user repositories and project branches.
Sure, it's a self-inflicted pain, and you'd think it could probably(?) be mitigated with shared repositories. But consider the simple case of two repositories: mozilla-central and autoland. You make autoland use mozilla-central as a shared repository. Now, you push something new to autoland, it's stored in the autoland datastore. Eventually, you merge to mozilla-central. Congratulations, it's now in both datastores, and you'd need to clean up autoland if you wanted to avoid the duplication.
Now, you'd think mozilla-unified would solve these issues, and it would... to some extent. Because that wouldn't cover user repositories and project branches briefly mentioned above, which in GitHub parlance would be considered as Forks. So you'd want a mega global datastore shared by all repositories, and repositories would need to only expose what they really contain. Does Mercurial support that? I don't think so (okay, I'll give you that: even if it doesn't, it could, but that's extra work). And since we're talking about a transition to Git, does Git support that? You may have read about how you can link to a commit from a fork and make-pretend that it comes from the main repository on GitHub? At least, it shows a warning, now. That's essentially the architectural reason why. So the actual answer is that Git doesn't support it out of the box, but GitHub has some backend magic to handle it somehow (and hopefully, other things like Gitea, Girocco, Gitlab, etc. have something similar).
Now, to come back to the size of the repository. A repository is not a static file. It's a server with which you negotiate what you have against what it has that you want. Then the server bundles what you asked for based on what you said you have. Or in the opposite direction, you negotiate what you have that it doesn't, you send it, and the server incorporates what you sent it. Fortunately the latter is less frequent and requires authentication. But the former is more frequent and CPU intensive. Especially when pulling a large number of changesets, which, incidentally, cloning is.
"But there is a solution for clones" you might say, which is true. That's clonebundles, which offload the CPU intensive part of cloning to a single job scheduled regularly. Guess who implemented it? Mozilla. But that only covers the cloning part. We actually had laid the ground to support offloading large incremental updates and split clones, but that never materialized. Even with all that, that still leaves you with a server that can display file contents, diffs, blames, provide zip archives of a revision, and more, all of which are CPU intensive in their own way.
And these endpoints are regularly abused, and cause extra load to your servers, yes plural, because of course a single server won't handle the load for the number of users of your big repositories. And because your endpoints are abused, you have to close some of them. And I'm not mentioning the Try repository with its tens of thousands of heads, which brings its own sets of problems (and it would have even more heads if we didn't fake-merge them once in a while).
Of course, all the above applies to Git (and it only gained support for something akin to clonebundles last year). So, when the Firefox OS project was stopped, there wasn't much motivation to continue supporting our own Git server, Mercurial still being the official point of entry, and git.mozilla.org was shut down in 2016.
The growing difficulty of maintaining the status quo
Slowly, but steadily in more recent years, as new tooling was added that needed some input from the source code manager, support for Git was more and more consistently added. But at the same time, as people left for other endeavors and weren't necessarily replaced, or more recently with layoffs, resources allocated to such tooling have been spread thin.
Meanwhile, the repository growth didn't take a break, and the Try repository was becoming an increasing pain, with push times quite often exceeding 10 minutes. The ongoing work to move Try pushes to Lando will hide the problem under the rug, but the underlying problem will still exist (although the last version of Mercurial seems to have improved things).
On the flip side, more and more people have been relying on Git for Firefox development, to my own surprise, as I didn't really push for that to happen. It just happened organically, by ways of git-cinnabar existing, providing a compelling experience to those who prefer Git, and, I guess, word of mouth. I was genuinely surprised when I recently heard the use of Git among moz-phab users had surpassed a third. I did, however, occasionally orient people who struggled with Mercurial and said they were more familiar with Git, towards git-cinnabar. I suspect there's a somewhat large number of people who never realized Git was a viable option.
But that, on its own, can come with its own challenges: if you use git-cinnabar without being backed by gecko-dev, you'll have a hard time sharing your branches on GitHub, because you can't push to a fork of gecko-dev without pushing your entire local repository, as they have different commit histories. And switching to gecko-dev when you weren't already using it requires some extra work to rebase all your local branches from the old commit history to the new one.
Clone times with git-cinnabar have also started to go a little out of hand in the past few years, but this was mitigated in a similar manner as with the Mercurial cloning problem: with static files that are refreshed regularly. Ironically, that made cloning with git-cinnabar faster than cloning with Mercurial. But generating those static files is increasingly time-consuming. As of writing, generating those for mozilla-unified takes close to 7 hours. I was predicting clone times over 10 hours "in 5 years" in a post from 4 years ago; I wasn't too far off. With exponential growth, it could still happen, although to be fair, CPUs have improved since. I will explore the performance aspect in a subsequent blog post, alongside the upcoming release of git-cinnabar 0.7.0-b1. I don't even want to check how long it now takes with hg-git or git-remote-hg (they were already taking more than a day when git-cinnabar was taking a couple hours).
I suppose it's about time that I clarify that git-cinnabar has always been a side-project. It hasn't been part of my duties at Mozilla, and the extent to which Mozilla supports git-cinnabar is in the form of taskcluster workers on the community instance for both git-cinnabar CI and generating those clone bundles. Consequently, that makes the above git-cinnabar specific issues a Me problem, rather than a Mozilla problem.
Taking the leap
I can't talk for the people who made the proposal to move to Git, nor for the people who put a green light on it. But I can at least give my perspective.
Developers have regularly asked why Mozilla was still using Mercurial, but I think it was the first time that a formal proposal was laid out. And it came from the Engineering Workflow team, responsible for issue tracking, code reviews, source control, build and more.
It's easy to say "Mozilla should have chosen Git in the first place", but back in 2007, GitHub wasn't there, Bitbucket wasn't there, and all the available options were rather new (especially compared to the then 21-year-old CVS). I think Mozilla made the right choice, all things considered. Had they waited a couple years, the story might have been different.
You might say that Mozilla stayed with Mercurial for so long because of the sunk cost fallacy. I don't think that's true either. But after the biggest Mercurial repository hosting service turned off Mercurial support, and the main contributor to Mercurial went their own way, it's hard to ignore that the landscape has evolved.
And the problems that we regularly encounter with the Mercurial servers are not going to get any better as the repository continues to grow. As far as I know, all the Mercurial repositories bigger than Mozilla's are... not using Mercurial. Google has its own closed-source server, and Facebook has another of its own, and it's not really public either. With resources spread thin, I don't expect Mozilla to be able to continue supporting a Mercurial server indefinitely (although I guess Octobus could be contracted to give a hand, but is that sustainable?).
Mozilla, being a champion of Open Source, also doesn't live in a silo. At some point, you have to meet your contributors where they are. And the Open Source world is now predominantly using Git. I'm sure the vast majority of new hires at Mozilla in the past, say, 5 years, know Git and have had to learn Mercurial (although they arguably didn't need to). Even within Mozilla, with thousands(!) of repositories on GitHub, Firefox is now actually the exception rather than the norm. I should even actually say Desktop Firefox, because even Mobile Firefox lives on GitHub (although Fenix is moving back in together with Desktop Firefox, and the timing is such that that will probably happen before Firefox moves to Git).
Heck, even Microsoft moved to Git!
With a significant developer base already using Git thanks to git-cinnabar, and all the constraints and problems I mentioned previously, it actually seems natural that a transition (finally) happens. However, had git-cinnabar or something similarly viable not existed, I don't think Mozilla would be in a position to take this decision. On one hand, it probably wouldn't be in the current situation of having to support both Git and Mercurial in the tooling around Firefox, nor the resource constraints related to that. But on the other hand, it would be farther from supporting Git and being able to make the switch in order to address all the other problems.
But... GitHub?
I hope I made a compelling case that hosting is not as simple as it can seem, at the scale of the Firefox repository. It's also not Mozilla's main focus. Mozilla has enough on its plate with the migration of existing infrastructure that does rely on Mercurial to understandably not want to figure out the hosting part, especially with limited resources, and with the mixed experience hosting both Mercurial and git has been so far.
After all, GitHub couldn't even display things like the contributors' graph on gecko-dev until recently, and hosting is literally their job! They still drop the ball on large blames (thankfully we have searchfox for those).
Where does that leave us? Gitlab? For those criticizing GitHub for being proprietary, that's probably not open enough. Cloud Source Repositories? "But GitHub is Microsoft" is a complaint I've read a lot after the announcement. Do you think Google hosting would have appealed to these people? Bitbucket? I'm kind of surprised it wasn't in the list of providers that were considered, but I'm also kind of glad it wasn't (and I'll leave it at that).
I think the only relatively big hosting provider that could have made the people criticizing the choice of GitHub happy is Codeberg, but I hadn't even heard of it before it was mentioned in response to Mozilla's announcement. But really, with literal thousands of Mozilla repositories already on GitHub, with literal tens of millions repositories on the platform overall, the pragmatic in me can't deny that it's an attractive option (and I can't stress enough that I wasn't remotely close to the room where the discussion about what choice to make happened).
"But it's a slippery slope". I can see that being a real concern. LLVM also moved its repository to GitHub (from a (I think) self-hosted Subversion server), and ended up moving off Bugzilla and Phabricator to GitHub issues and PRs four years later. As an occasional contributor to LLVM, I hate this move. I hate the GitHub review UI with a passion.
At least, right now, GitHub PRs are not a viable option for Mozilla, for their lack of support for security related PRs, and the more general shortcomings in the review UI. That doesn't mean things won't change in the future, but let's not get too far ahead of ourselves. The move to Git has just been announced, and the migration has not even begun yet. Just because Mozilla is moving the Firefox repository to GitHub doesn't mean it's locked in forever or that all the eggs are going to be thrown into one basket. If bridges need to be crossed in the future, we'll see then.
So, what's next?
The official announcement said we're not expecting the migration to really begin until six months from now. I'll swim against the current here, and say this: the earlier you can switch to git, the earlier you'll find out what works and what doesn't work for you, whether you already know Git or not.
While there is not one unique workflow, here's what I would recommend anyone who wants to take the leap off Mercurial right now:
Install git-cinnabar where mach bootstrap would install it:
$ mkdir -p ~/.mozbuild/git-cinnabar
$ cd ~/.mozbuild/git-cinnabar
$ curl -sOL https://raw.githubusercontent.com/glandium/git-cinnabar/master/download.py
$ python3 download.py && rm download.py
Add git-cinnabar to your PATH. Make sure to also set that wherever you keep your PATH up to date (.bashrc or wherever else):
$ PATH=$PATH:$HOME/.mozbuild/git-cinnabar
$ git init
$ git remote add origin https://github.com/mozilla/gecko-dev
$ git remote update origin
$ git remote set-url origin hg::https://hg.mozilla.org/mozilla-unified
$ git config --local remote.origin.cinnabar-refs bookmarks
$ git remote update origin --prune
$ git -c cinnabar.refs=heads fetch hg::$PWD refs/heads/default/*:refs/heads/hg/*
This will create a bunch of hg/<sha1> local branches, not all relevant to you (some come from old branches on mozilla-central). Note that if you're using Mercurial MQ, this will not pull your queues, as they don't exist as heads in the Mercurial repo. You'd need to apply your queues one by one and run the command above for each of them.

$ git -c cinnabar.refs=bookmarks fetch hg::$PWD refs/heads/*:refs/heads/hg/*
This will create hg/<bookmark_name>
branches.
$ git reset $(git cinnabar hg2git $(hg log -r . -T '{node}'))
This will take a little moment because Git is going to scan all the files in the tree for the first time. On the other hand, it won't touch their content or timestamps, so if you had a build around, it will still be valid, and mach build
won't rebuild anything it doesn't have to.
$ git branch <branch_name> $(git cinnabar hg2git <hg_sha1>)
At this point, you should have everything available on the Git side, and you can remove the .hg
directory. Or move it into some empty directory somewhere else, just in case. But don't leave it here, it will only confuse the tooling. Artifact builds WILL be confused, though, and you'll have to ./mach configure
before being able to do anything. You may also hit bug 1865299 if your working tree is older than this post.
If you have any problem or question, you can ping me on #git-cinnabar or #git on Matrix. I'll put the instructions above somewhere on wiki.mozilla.org, and we can collaboratively iterate on them.
Now, what the announcement didn't say is that the Git repository WILL NOT be gecko-dev, doesn't exist yet, and WON'T BE COMPATIBLE (trust me, it'll be for the better). Why did I make you do all the above, you ask? Because that won't be a problem. I'll have you covered, I promise. The upcoming release of git-cinnabar 0.7.0-b1 will have a way to smoothly switch between gecko-dev and the future repository (incidentally, that will also allow to switch from a pure git-cinnabar clone to a gecko-dev one, for the git-cinnabar users who have kept reading this far).
What about git-cinnabar?
With Mercurial going the way of the dodo at Mozilla, my own need for git-cinnabar will vanish. Legitimately, this raises the question of whether it will still be maintained.
I can't answer for sure. I don't have a crystal ball. However, the needs of the transition itself will motivate me to finish some long-standing things (like finalizing the support for pushing merges, which is currently behind an experimental flag) or implement some missing features (support for creating Mercurial branches).
Git-cinnabar started as a Python script; it grew a sidekick implemented in C, which then incorporated some Rust, which then cannibalized the Python script and took its place. It is now close to 90% Rust and 10% C (if you don't count the code from Git that is statically linked to it), and has sort of become my Rust playground (it's also, I must admit, a mess, because of its history, but it's getting better). So the day-to-day use with Mercurial is not my sole motivation to keep developing it. If it were, it would stay stagnant, because all the features I need are there, and the speed is not all that bad, although I know it could be better.
So, no, I don't expect git-cinnabar to die along Mercurial use at Mozilla, but I can't really promise anything either.
Final words
That was a long post. But there was a lot of ground to cover. And I still skipped over a bunch of things. I hope I didn't bore you to death. If I did and you're still reading... what's wrong with you? ;)
So this is the end of Mercurial at Mozilla. So long, and thanks for all the fish. But this is also the beginning of a transition that is not easy, and that will not be without hiccups, I'm sure. So fasten your seatbelts (plural), and welcome the change.
To circle back to the clickbait title, did I really kill Mercurial at Mozilla? Of course not. But it's like I stumbled upon a few sparks and tossed a can of gasoline on them. I didn't start the fire, but I sure made it into a proper bonfire... and now it has turned into a wildfire.
And who knows? 15 years from now, someone else might be looking back at how Mozilla picked Git at the wrong time, and that, had we waited a little longer, we would have picked some yet to come new horse. But hey, that's the tech cycle for you.
Ubuntu 23.10 Mantic Minotaur Desktop, showing network settings
We released Ubuntu 23.10 Mantic Minotaur on 12 October 2023, shipping its proven and trusted network stack based on Netplan. Netplan has been the default tool to configure Linux networking on Ubuntu since 2016. In the past, it was primarily used to control the Server and Cloud variants of Ubuntu, while on Desktop systems it would hand over control to NetworkManager. In Ubuntu 23.10 this disparity in how to control the network stack on different Ubuntu platforms was closed by integrating NetworkManager with the underlying Netplan stack.
Netplan could already be used to describe network connections on Desktop systems managed by NetworkManager. But network connections created or modified through NetworkManager would not be known to Netplan, so it was a one-way street. Activating the bidirectional NetworkManager-Netplan integration allows for any configuration change made through NetworkManager to be propagated back into Netplan. Changes made in Netplan itself will still be visible in NetworkManager, as before. This way, Netplan can be considered the single source of truth for network configuration across all variants of Ubuntu, with the network configuration stored in /etc/netplan/
, using Netplan's common and declarative YAML format.
All of the network configuration ends up in /etc/netplan/. This way, the only thing administrators need to care about when managing a fleet of Desktop installations is Netplan. Furthermore, programmatic access to all network configuration is now easily available to other system components integrating with Netplan, such as snapd. This solution has already been used in more confined environments, such as Ubuntu Core, and is now enabled by default on Ubuntu 23.10 Desktop.
Existing connection profiles stored in /etc/NetworkManager/system-connections/ will automatically and transparently be migrated to Netplan's declarative YAML format and stored in its common configuration directory /etc/netplan/.
The same migration will happen in the background whenever you add or modify any connection profile through the NetworkManager user interface, integrated with GNOME Shell. From this point on, Netplan will be aware of your entire network configuration and you can query it using its CLI tools, such as sudo netplan get or sudo netplan status, without interrupting traditional NetworkManager workflows (UI, nmcli, nmtui, D-Bus APIs). You can observe this migration on the apt-get command line, watching out for logs like the following:
Setting up network-manager (1.44.2-1ubuntu1.1) ...
Migrating HomeNet (9d087126-ae71-4992-9e0a-18c5ea92a4ed) to /etc/netplan
Migrating eduroam (37d643bb-d81d-4186-9402-7b47632c59b1) to /etc/netplan
Migrating DebConf (f862be9c-fb06-4c0f-862f-c8e210ca4941) to /etc/netplan
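For illustration, a migrated profile ends up as declarative Netplan YAML along these lines (a simplified sketch; the exact keys depend on the connection type, with NetworkManager-specific settings preserved under a dedicated mapping):

network:
  version: 2
  ethernets:
    NM-9d087126-ae71-4992-9e0a-18c5ea92a4ed:
      renderer: NetworkManager
      networkmanager:
        uuid: "9d087126-ae71-4992-9e0a-18c5ea92a4ed"
        name: "HomeNet"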
In order to prepare for a smooth transition, NetworkManager tests were integrated into Netplan's continuous integration pipeline at the upstream GitHub repository. Furthermore, we implemented a passthrough method of handling unknown or new settings that cannot yet be fully covered by Netplan, making Netplan future-proof for any upcoming NetworkManager release.
if [ "$TERM" == "xterm-kitty" ]; then alias icat='kitty +kitten icat' fiThe kitten interface can be supported by other programs. The version of the mpv video player in Debian/Unstable has a --vo=kitty option which is an interesting feature. However playing a video in a Kitty window that takes up 1/4 of the screen on my laptop takes a bit over 100% of a CPU core for mpv and about 10% to 20% for Kitty which gives a total of about 120% CPU use on my i5-6300U compared to about 20% for mpv using wayland directly. The option to make it talk to Kitty via shared memory doesn t improve things. Using this effectively requires installing the kitty-terminfo package on every system you might ssh to. But you can set the term type to xterm-256color when logged in to a system without the kitty terminfo installed. The fact that icat and presumably other advanced terminal functions work over ssh by default is a security concern, but this also works with Konsole and will presumably be added to other terminal emulators so it s a widespread problem that needs attention. There is support for desktop notifications in the Kitty terminal encoding [2]. One of the things I m interested in at the moment is how to best manage notifications on converged systems (phone and desktop) so this is something I ll have to investigate. Overall Kitty has some great features and definitely has the potential to improve productivity for some work patterns. There are some security concerns that it raises through closer integration between systems and between programs, but many of them aren t exclusive to Kitty.
Saturday, around 18:00: in a bowl, mix together and work well:
- 250 g water;
- 400 g flour;
- 8 g salt;
then cover to rise.

Saturday, around 21:00: in a small bowl, mix together:
- 2-3 g yeast;
- 10 g water;
- 10 g flour.
In the bowl with the original dough, add the contents of the small bowl plus:
- 100 g flour;
- 100 g water;
and work well; cover to rise overnight.

Sunday, around 8:00: pour the dough on a lined oven tray, leave in the cold oven to rise.

Sunday, around 11:00: remove the tray from the oven, preheat the oven to 240°C, bake for 10 minutes, then lower the temperature to 160°C and bake for 20 more minutes. Waiting until it has cooled down a bit will make it easier to cut, but is not strictly necessary.

I've had up to a couple of hours' variation in the times listed, with no ill effects.
The arduino IDE package was in Debian. But it turns out that the Debian package's version doesn't support the DigiSpark. (AFAICT from the list it offered me, I'm not sure it supports any ATTiny85 board.) Also, disturbingly, its board manager seemed to be offering to install board support, suggesting it would download stuff from the internet and run it. That wouldn't be acceptable for my main laptop.
I didn't expect to be doing much programming or debugging, and the project didn't have significant security requirements: the chip, in my circuit, has only a very narrow ability to do anything to the real world, and no network connection of any kind. So I thought it would be tolerable to do the project on my low-security "video laptop". That's the machine where I'm prepared to say yes to installing random software off the internet.
So I went to the upstream Arduino site and downloaded a tarball containing the Arduino IDE. After unpacking that in /opt, it ran and produced a pointy-clicky IDE, as expected. I had already found a 3rd-party tutorial saying I needed to add a magic URL (from the DigiSpark's vendor) in the preferences. That indeed allowed it to download a whole pile of stuff: compilers, bootloader clients, god knows what.
However, my tiny test program didn't make it to the board. Half-buried in a too-small window was an error message about the board's bootloader ("Micronucleus") being too new.
The boards I had came pre-flashed with micronucleus 2.2, which is hardly new. But even so, the official Arduino IDE (or maybe the DigiSpark's board package?) still contains an old version. So now we have all the downsides of "curl|bash"-ware, but we're lacking the "it's up to date" and "it just works" upsides.
Further digging found some random forum posts which suggested simply downloading a newer micronucleus and manually stuffing it into the right place: one overwrites a specific file in the middle of the heaps of stuff that the Arduino IDE's board support downloader squirrels away in your home directory. (In my case, the home directory of the untrusted shared user on the video laptop.)

So, "whatever". I did that. And it worked!

Having demo'd my ability to run code on the board, I set about writing my program.
Writing C again
The programming language offered via the Arduino IDE is C.
It's been a little while since I started a new thing in C. After having spent so much of the last several years writing Rust, C's primitiveness quickly started to grate, and the program couldn't easily be as DRY as I wanted (Don't Repeat Yourself, see Wilson et al, 2012, §4, p.6). But I carried on; after all, this was going to be quite a small job.
Soon enough I had a program that looked right and compiled.
Before testing it in circuit, I wanted to do some QA. So I wrote a simulator harness that #include'd my Arduino source file, and provided imitations of the few Arduino library calls my program used. As a side advantage, I could build and run the simulation on my main machine, in my normal development environment (Emacs, make, etc.). The simulator runs confirmed the correct behaviour. (Perhaps there would have been some more faithful simulation tool, but the Arduino IDE didn't seem to offer one, and I wasn't inclined to go further down that kind of path.)
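A minimal sketch of what such a harness can look like; everything here is illustrative (the file names, pins and timings are not from the original project):

/* sim.c: imitations of the few Arduino calls the program under test uses */
#include <stdio.h>
#include <stdint.h>

#define OUTPUT 1
#define HIGH   1
#define LOW    0

static unsigned long fake_clock_ms;
unsigned long millis(void) { return fake_clock_ms; }
void pinMode(uint8_t pin, uint8_t mode) { (void)pin; (void)mode; }
void digitalWrite(uint8_t pin, uint8_t value)
{
    printf("%6lu ms: pin %d <- %d\n", fake_clock_ms, pin, value);
}

#include "program.ino" /* the Arduino source file under test (hypothetical name) */

int main(void)
{
    setup();
    /* drive loop() with simulated time */
    for (fake_clock_ms = 0; fake_clock_ms < 10000; fake_clock_ms += 10)
        loop();
    return 0;
}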
So I got the video laptop out, and used the Arduino IDE to flash the program. It didn't run properly. It hung almost immediately. Some very ad-hoc debugging via LED-blinking (like printf debugging, only much worse) convinced me that my problem was as follows:
Arduino C has 16-bit ints. My test harness was on my 64-bit Linux machine. C was autoconverting things (when building for the microcontroller). The way the Arduino IDE ran the compiler didn't pass the warning options necessary to spot narrowing implicit conversions. Those warnings aren't the default in C in general.
Other things about the Arduino IDE were vexing: it has fairly fixed notions (which don't seem to be documented) about how your files and directories ought to be laid out, and magical machinery for finding things you put near its "sketch" (as it calls them) and sticking them in its ear, causing lossage. It has a tendency to become confused if you edit files under its feet (e.g. with git checkout). It wasn't really very suited to a workflow where principal development occurs elsewhere.
And important settings, such as the project's clock speed, or even the target board, or the compiler warning settings to use, weren't stored in the project directory along with the actual code. I didn't look too hard, but I presume they must be in a dotfile somewhere. This is madness.
Apparently there is an Arduino CLI too. But I was already quite exasperated, and I didn't like the idea of going so far off the beaten path, when the whole point of using all this was to stay with popular tooling and share fate with others. (How do these others cope? I have no idea.)
As for the integer overflow bug: I didn't seriously consider trying to figure out how to control the C compiler options passed by the Arduino IDE in detail. (Perhaps this is possible, but it's not really documented?) I did consider trying to run a cross-compiler myself from the command line, with appropriate warning options, but that would have involved providing (or stubbing, again) the Arduino/DigiSpark libraries (and bugs could easily lurk at that interface).
Instead, I thought, "if only I had written the thing in Rust". But that wasn't possible, was it? Does Rust even support this board?
Rust on the DigiSpark
I did a cursory web search and found a very useful blog post by Dylan Garrett. This encouraged me to think it might be a workable strategy. I looked at the instructions there. It seemed like I could run them via the privsep arrangement I use to protect myself when developing using upstream cargo packages from crates.io.
I got surprisingly far, surprisingly quickly. It did, rather startlingly, cause my rustup to download a random recent Nightly Rust, but I have six of those already for other Reasons. Very quickly I got the trinket LED blink example, referenced by Dylan's blog post, to compile. Manually copying the file to the video laptop allowed me to run the previously-downloaded micronucleus executable and successfully run the blink example on my board!
I thought a more principled approach to the bootloader client might allow a more convenient workflow. I found the upstream Micronucleus git releases and tags, and had a look over its source code, release dates, etc. It seemed plausible, so I compiled v2.6 from source. That was a success: now I could build and install a Rust program onto my board, from the command line, on my main machine. No more pratting about with the video laptop.
I had got further, more quickly, with Rust than with the Arduino IDE, and the outcome and workflow were superior.
So, basking in my success, I copied the directory containing the example into my own project, renamed it, and adjusted the path references.
That didn't work. Now it didn't build. Even after I copied over .cargo/config.toml and rust-toolchain.toml, it didn't build, producing a variety of exciting messages depending on what precisely I tried. I don't have detailed logs of my flailing: the instructions say to build it by cd'ing to the subdirectory, and, given that what I was trying to do was precisely not to follow those instructions, it didn't seem sensible to try to prepare a proper repro so I could file a ticket. I wasn't optimistic about investigating it more deeply myself: I have some experience of fighting cargo, and it's not usually fun. Looking at some of the build control files, things seemed quite complicated.
Additionally, not all of the crates are on crates.io. I have no idea why not. So, I would need to supply local copies of them anyway. I decided to just git subtree add the avr-hal git tree.
(That seemed better than the approach taken by the avr-hal project's cargo template, since that template involves a cargo dependency on a foreign git repository. Perhaps it would be possible to turn them into path dependencies, but given that I had evidence of file-location-sensitive behaviour, which I didn't feel like spending time investigating, using that seemed like it would possibly have invited more trouble. Also, I don't like package templates very much. They're a form of clone-and-hack: you end up stuck with whatever bugs or oddities exist in the version of the template which was current when you started.)
Since I couldn't get things to build outside avr-hal, I edited the example, within avr-hal, to refer to my (one) program.rs file outside avr-hal, with a #[path] attribute. That's not pretty, but it worked.
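For anyone who hasn't met it, #[path] lets a module declaration name an arbitrary source file. A minimal sketch of the trick (the relative path and the run entry point are invented for illustration; the real example uses avr-hal's own entry-point boilerplate rather than a plain main):

// In the example crate, inside avr-hal: compile a source file that
// lives outside the avr-hal tree as a module of this crate.
// The path is relative to this file and is hypothetical.
#[path = "../../../../program.rs"]
mod program;

fn main() {
    program::run(); // whatever entry point program.rs exposes
}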
I also had to write a nasty shell script to work around the lack of good support in my nailing-cargo privsep tool for builds where cargo must be invoked in a deep subdirectory, and/or Cargo.lock isn't where it expects, and/or the target directory containing build products is in a weird place. The script also has to filter the output from cargo to adjust the pathnames in the error messages. Otherwise, running both cd A; cargo build and cd B; cargo build from a Makefile produces confusing sets of error messages, some of which contain filenames relative to A and some relative to B, making it impossible for my Emacs to reliably find the right file.
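The pathname-adjusting part is conceptually simple. A minimal sketch of the idea, written as a standalone Rust filter rather than the actual shell script (the "--> " matching is a simplification of how cargo reports locations, and the prefix argument is my invention):

use std::io::{self, BufRead, Write};

fn main() -> io::Result<()> {
    // Subdirectory prefix to splice into cargo's reported paths,
    // e.g. "A/"; in reality this would come from the build setup.
    let prefix = std::env::args().nth(1).unwrap_or_default();
    let stdin = io::stdin();
    let mut out = io::stdout();
    for line in stdin.lock().lines() {
        let line = line?;
        // cargo locations look like "  --> src/main.rs:10:5"; splice
        // the subdirectory in so the editor resolves the file from the
        // directory the Makefile runs in.
        let rewritten = match line.find("--> ") {
            Some(i) => format!("{}{}{}", &line[..i + 4], prefix, &line[i + 4..]),
            None => line,
        };
        writeln!(out, "{}", rewritten)?;
    }
    Ok(())
}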
RIIR (Rewrite It In Rust)
Having got my build tooling sorted out, I could go back to my actual program.
I translated the main program, and the simulator, from C to Rust, more or less line-by-line. I made the Rust version of the simulator produce the same output format as the C one. That let me check that the two programs had the same (simulated) behaviour. Which they did (after fixing a few glitches in the simulator log formatting).
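For concreteness, the shape of the Rust version of that harness is roughly this (a sketch only; all the names are invented, and the real file layout and calls differ): the simulator build provides logging stand-ins for the few hardware operations, then compiles the unmodified program source against them.

// sim.rs -- host-side simulator harness (a sketch; names hypothetical).
// Stand-ins for the hardware operations the program uses: they log
// instead of driving real pins or busy-waiting.
pub fn set_led(on: bool) {
    println!("led={}", on);
}

pub fn delay_ms(ms: u32) {
    println!("delay {}ms", ms);
}

// Compile the real program, unmodified, as a module of this crate.
// It calls crate::set_led / crate::delay_ms; the firmware build maps
// the same names onto the HAL instead.
#[path = "program.rs"]
mod program;

fn main() {
    program::run(); // hypothetical entry point
}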
Emboldened, I flashed the Rust version of my program to the DigiSpark. It worked right away!
RIIR had caused the bug to vanish. Of course: to rewrite the program in Rust, and get it to compile, it was necessary to be careful about the types of all the various integers, so that's not so surprising. Indeed, it was the point. I was then able to refactor the program to be a bit more natural and DRY, and improve some internal interfaces. Rust's greater power, compared to C, made those cleanups easier, and so made them worthwhile.
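To illustrate the class of bug the rewrite flushed out: on AVR, C's int is only 16 bits, so arithmetic that works on a 64-bit host silently wraps on the chip, whereas Rust spells out the widths and makes any narrowing visible. A contrived example:

fn main() {
    // A millisecond counter soon exceeds 32767, the largest value of
    // a 16-bit signed int (which is what C's `int` is on AVR).
    let uptime_ms: u32 = 40_000;

    // let x: i16 = uptime_ms;   // Rust refuses to compile this
    let x = uptime_ms as i16;    // allowed, but the cast is explicit
    println!("wrapped: {}", x);  // prints -25536, visibly wrong
}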
However, when doing real-world testing I found a weird problem: my timings were off. Measured, the real program was too fast by a factor of slightly more than 2. A bit of searching (and searching my memory) revealed the cause: I was using a board template for an Adafruit Trinket. The Trinket has a clock speed of 8MHz. But the DigiSpark runs at 16.5MHz. (This is discussed in a ticket against one of the C/C++ libraries supporting the ATTiny85 chip.)
The Arduino IDE had offered me a choice of clock speeds. I have no idea how that dropdown menu took effect; I suspect it was adding prelude code to adjust the clock prescaler. But my attempts to mess with the CPU clock prescaler register by hand at the start of my Rust program didn't bear fruit.
So instead, I adopted a bodge: since my code has (for code structure reasons, amongst others) only one place where it deals with the underlying hardware's notion of time, I simply changed my delay function to adjust the passed-in delay values, compensating for the wrong clock speed.
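In outline, the bodge looks something like this (a sketch rather than the real code: the HAL delay is stubbed out and the names are mine):

// The clock the Trinket board definition assumes, versus what the
// DigiSpark actually runs at.
const CONFIGURED_HZ: u64 = 8_000_000;
const ACTUAL_HZ: u64 = 16_500_000;

// Stand-in for the HAL's busy-wait delay, which counts cycles on the
// assumption of CONFIGURED_HZ and so returns roughly 2x too early.
fn hal_delay_ms(ms: u32) {
    let _ = ms; // the real HAL call would go here
}

// The one place in the program that deals with time: scale the
// requested delay by ACTUAL/CONFIGURED so the wall-clock duration
// comes out right.
fn delay_ms(ms: u32) {
    let adjusted = (ms as u64 * ACTUAL_HZ / CONFIGURED_HZ) as u32;
    hal_delay_ms(adjusted);
}

fn main() {
    delay_ms(1_000); // now takes about one real second on the board
}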
There was probably a more principled way. For example, I could have (re)based my work on either of the two unmerged open MRs which added proper support for the DigiSpark board, rather than abusing the Adafruit Trinket definition. But, having a nearly-working setup, and an explanation for the behaviour, I preferred the narrower fix to reopening any cans of worms.
An offer of help
As will be obvious from this posting, I'm not an expert in dev tools for embedded systems. Far from it. This area seems like quite a deep swamp, and I'm probably not the person to help drain it. (Frankly, much of the improvement work ought to be done, and paid for, by hardware vendors.)
But, as a full Member of the Debian Project, I have considerable gatekeeping authority there. I also have much experience of software packaging, build systems, and release management. If anyone wants to try to improve the situation with embedded tooling in Debian, and is willing to do the actual packaging work, I would be happy to advise, and to review and sponsor your contributions.
An obvious candidate: it seems to me that micronucleus could easily be in Debian. Possibly a DigiSpark board definition could be provided to go with the arduino package.
Unfortunately, IMO Debian's Rust packaging tooling and workflows are very poor, and the first of my suggestions for improvement wasn't well received. So if you need help with improving Rust packages in Debian, please talk to the Debian Rust Team yourself.
Conclusions
Embedded programming is still rather a mess and probably always will be.
Embedded build systems can be bizarre. Documentation is scant. You're often expected to download board support packages full of mystery binaries, from the board vendor (or others).
Dev tooling is maddening, especially if aimed at novice programmers. You want version control? Hermetic tracking of your project's build and install configuration? Actually to be told by the compiler when you write obvious bugs? You're way off the beaten track.
As ever, Free Software is under-resourced and the maintainers are often busy, or (reasonably) have other things to do with their lives.
All is not lost
Rust can be a significantly better bet than C for embedded software:
The Rust compiler will catch a good proportion of programming errors, and an experienced Rust programmer can arrange (by suitable internal architecture) to catch nearly all of them. When writing for a chip in the middle of some circuit, where debugging involves staring at an LED or a multimeter, that's precisely what you want.
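As one example of the sort of internal architecture I mean (my illustration, nothing from the actual program): newtype wrappers let the compiler reject confusions between, say, milliseconds and timer ticks, precisely the kind of mistake you cannot debug with an LED:

// Distinct wrapper types for the two units; mixing them up becomes a
// compile-time error rather than a silent runtime bug.
#[derive(Copy, Clone, Debug)]
struct Millis(u32);
#[derive(Copy, Clone, Debug)]
struct Ticks(u32);

fn to_ticks(ms: Millis, ticks_per_ms: u32) -> Ticks {
    Ticks(ms.0 * ticks_per_ms)
}

fn main() {
    let t = to_ticks(Millis(250), 8);
    // to_ticks(t, 8); // refuses to compile: expected Millis, found Ticks
    println!("{:?}", t);
}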
Rust embedded dev tooling was, in this case, considerably better. Still quite chaotic and strange, and less mature, perhaps. But: significantly fewer mystery downloads, and significantly less crazy deviation from the language's normal build system. Overall, less bad software supply chain integrity.
The ATTiny85 chip, and the DigiSpark board, served my hardware needs very well. (More about the hardware aspects of this project in a future posting.)

mariadb.sys user which ... doesn't have a password set. It seems to be locked down in other ways, but my dumb script didn't know about that and happily deleted the user.
Who needs that mariadb.sys user anyway?
Apparently we all do. On one server, I can't log in as root anymore. On another server I can log in as root, but if I try to list users I get an error:
ERROR 1449 (HY000): The user specified as a definer ('mariadb.sys'@'localhost') does not exist
The Internet is full of useless advice. The most common is to simply insert that user. Except:
MariaDB [mysql]> CREATE USER 'mariadb.sys'@'localhost' ACCOUNT LOCK PASSWORD EXPIRE;
ERROR 1396 (HY000): Operation CREATE USER failed for 'mariadb.sys'@'localhost'
MariaDB [mysql]>
Yeah, that's not going to work.
It seems like we are dealing with two changes. One, the old mysql.user table was replaced by the global_priv table and then turned into a view for backwards compatibility. And two, for sensible reasons, the default definer for this view has been changed from the root user to a user that, ahem, is unlikely to be changed or deleted.
Apparently I can't add the mariadb.sys user because doing so would alter the user view, which has a definer that doesn't exist. Although I'm not sure if this really is the reason?
Fortunately, I found an excellent suggestion for changing the definer of a view. My modified version of the answer is: run the following command, which will generate a SQL statement:
SELECT CONCAT("ALTER DEFINER=root@localhost VIEW ", table_name, " AS ", view_definition, ";") FROM information_schema.views WHERE table_schema='mysql' AND definer = 'mariadb.sys@localhost';
Then, execute the statement.
And then also update the mysql.proc table:
UPDATE mysql.proc SET definer = 'root@localhost' WHERE definer = 'mariadb.sys@localhost';
And lastly, I had to run:
DELETE FROM tables_priv WHERE User = 'mariadb.sys';
FLUSH PRIVILEGES;
Wait, was the tables_priv entry the whole problem all along? Not sure. But now I can run:
CREATE USER 'mariadb.sys'@'localhost' ACCOUNT LOCK PASSWORD EXPIRE;
GRANT SELECT, DELETE ON mysql.global_priv TO 'mariadb.sys'@'localhost';
And reverse the other statements:
SELECT CONCAT("ALTER DEFINER='mariadb.sys'@'localhost' VIEW ", table_name, " AS ", view_definition, ";") FROM information_schema.views WHERE table_schema='mysql' AND definer = 'root@localhost';
[Execute the output.]
UPDATE mysql.proc SET definer = 'mariadb.sys@localhost' WHERE definer = 'root@localhost';
And while we're on the topic of borked MariaDB authentication, here are the steps to change the root password and restore all root privileges if you can't get in at all, or your root user is missing the GRANT OPTION (you can change ALTER to CREATE if the root user does not even exist):
systemctl stop mariadb
mariadbd-safe --skip-grant-tables --skip-networking &
mysql -u root
[mysql]> FLUSH PRIVILEGES;
[mysql]> ALTER USER 'root'@'localhost' IDENTIFIED VIA mysql_native_password USING PASSWORD('your-secret-password') OR unix_socket;
[mysql]> GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
mariadb-admin shutdown
systemctl start mariadb
systemd unit files if dh_installsystemd and systemd.pc change their installation targets. Unfortunately, doing so makes some packages FTBFS, and therefore patches have been filed.
The analysis tool, dumat, has been enhanced to better understand which upgrade scenarios are considered supported, in order to reduce false-positive bug filings, and it has gained a mode for local operation on a .changes file, meant for inclusion in salsa-ci. The filing of bugs from dumat is still manual, to improve the quality of reports.
Since September, the moratorium has been lifted.
debvm, enabling its use in autopkgtests. autopkgtest-build-qemu: it is powered by mmdebstrap, therefore unprivileged, EFI-only, and will soon be included in mmdebstrap.Next.